With the rapid development of online education platforms represented by Massive Open Online Courses (MOOCs), evaluating the large volume of subjective question assignments submitted by platform learners is a major challenge. Peer grading is the mainstream approach to this challenge and has attracted wide attention from both academia and industry in recent years. Therefore, peer grading technologies for online education were surveyed and analyzed. Firstly, the general process of peer grading was summarized. Secondly, the main research results on important peer grading activities, such as grader allocation, comment analysis, detection and handling of abnormal peer grading information, and true grade estimation of subjective question assignments, were explained. Thirdly, the peer grading functions of representative online education platforms and published teaching systems were compared. Finally, the future development trends of peer grading were summarized and discussed, providing a reference for researchers who are engaged in or intend to engage in peer grading research.
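One of the activities mentioned above, true grade estimation, can be illustrated with a minimal sketch. The function below is a hypothetical, simplified scheme (not any specific model from the surveyed literature): it iteratively weights each grader by how close their scores are to the current consensus, so unreliable graders are gradually discounted.

```python
# Illustrative sketch only: estimate "true" grades from peer grades by
# alternating between (1) a reliability-weighted consensus per submission
# and (2) re-estimating each grader's reliability from their error.
# All names and formulas here are hypothetical simplifications.

def estimate_true_grades(peer_grades, n_iters=20):
    """peer_grades: dict mapping submission -> {grader: score}."""
    graders = {g for scores in peer_grades.values() for g in scores}
    weight = {g: 1.0 for g in graders}   # start with equal reliability
    estimate = {}
    for _ in range(n_iters):
        # Consensus grade: reliability-weighted mean of the peer scores
        for sub, scores in peer_grades.items():
            total_w = sum(weight[g] for g in scores)
            estimate[sub] = sum(weight[g] * s for g, s in scores.items()) / total_w
        # Reliability: inverse of the grader's mean absolute error
        for g in graders:
            errs = [abs(s - estimate[sub])
                    for sub, scores in peer_grades.items()
                    for gr, s in scores.items() if gr == g]
            weight[g] = 1.0 / (sum(errs) / len(errs) + 1e-6)
    return estimate

# Toy data: "eve" grades erratically low, so her influence should shrink.
grades = {
    "essay1": {"alice": 8, "bob": 9, "eve": 2},
    "essay2": {"alice": 6, "bob": 7, "eve": 1},
}
est = estimate_true_grades(grades)
```

With the toy data, the estimates move above the plain per-essay means because the outlier grader is progressively down-weighted.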
In order to select more reasonable relay nodes for message transmission and improve message delivery efficiency in opportunistic networks, a message forwarding utility was designed and a corresponding message copy forwarding algorithm was proposed. Firstly, based on the historical encounter information of nodes, the indirect encounter probability of nodes and its time-effectiveness were analyzed, and a time-effectiveness indicator was proposed to evaluate the value of encounter information. Secondly, combined with the similarity of node motion, the problem of repeated message diffusion was analyzed, and a node movement deviation indicator was proposed to evaluate the possibility of repeated message diffusion. Simulation results show that, compared with the Epidemic, PRoPHET, MaxProp and SAW (Spray And Wait) algorithms, the proposed algorithm achieves better performance in delivery success rate, overhead and delay.
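The general shape of such a utility-based relay decision can be sketched as follows. The formulas here are hypothetical stand-ins, not the paper's actual indicators: encounter probability is discounted by the age of the encounter information (time-effectiveness) and by movement similarity (repeated-diffusion risk).

```python
# Illustrative sketch only (all formulas are hypothetical): a relay-selection
# rule that discounts a neighbour's encounter probability by how stale the
# encounter information is and by how similar the neighbour's movement is
# to the current carrier's (similar movement risks repeated diffusion).

import math

def forwarding_utility(p_encounter, info_age, motion_similarity, decay=0.1):
    """Utility of handing a message copy to a neighbour.

    p_encounter       : neighbour's probability of meeting the destination, in [0, 1]
    info_age          : seconds since that encounter information was recorded
    motion_similarity : similarity of movement, in [0, 1]; high similarity
                        means the copy would likely cover the same area again
    """
    time_effectiveness = math.exp(-decay * info_age)  # stale info counts less
    deviation = 1.0 - motion_similarity               # reward diverging paths
    return p_encounter * time_effectiveness * deviation

def choose_relay(neighbours):
    """neighbours: list of (node_id, p_encounter, info_age, similarity)."""
    best = max(neighbours, key=lambda n: forwarding_utility(n[1], n[2], n[3]))
    return best[0]
```

Under this toy rule, a neighbour with fresh encounter information and a diverging trajectory beats one with a nominally higher but stale encounter probability.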
Traditional AVL (Adelson-Velskii and Landis) tree programming suffers from lengthy code, complex procedures and a high adjusting ratio. To solve these problems, a unified rebalancing method was developed and a generalized AVL tree (AVL-N tree) was defined. The unified rebalancing method automatically classifies the type of an unbalanced node in the AVL tree and adjusts the tree shape in a new way, without using standard rotations. The AVL-N tree with relaxed balance allows the height difference between the right and left sub-trees of a node to be at most N (N ≥ 1). After insertions and deletions are performed in an AVL-N tree, the height difference between the right and left sub-trees of some nodes may exceed N; the unified rebalancing is then applied to rearrange the unbalanced node's descendants. The simulation results indicate that the adjusting ratio of the AVL-N tree decreases significantly as N increases: it is less than 4% for N=5 and less than 0.1% for N=13. The adjusting ratio of the AVL-N tree is far below that of other classic data structures, such as the red-black tree, and the structure allows a greater degree of concurrency than the original proposal.
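The relaxed-balance idea can be sketched minimally. This is not the paper's unified rebalancing algorithm: as a crude stand-in for rearranging an unbalanced node's descendants without rotations, the sketch simply rebuilds the offending subtree into a perfectly balanced one whenever the height difference exceeds N.

```python
# Minimal sketch of a relaxed-balance BST: inserts tolerate a height
# difference of up to N, and an unbalanced node has its entire subtree
# rebuilt (a simple stand-in for rebalancing its descendants in place).

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def height(n):
    return 0 if n is None else 1 + max(height(n.left), height(n.right))

def inorder(n, out):
    if n:
        inorder(n.left, out); out.append(n.key); inorder(n.right, out)
    return out

def build_balanced(keys):
    """Rebuild a sorted key list into a perfectly balanced subtree."""
    if not keys:
        return None
    mid = len(keys) // 2
    node = Node(keys[mid])
    node.left = build_balanced(keys[:mid])
    node.right = build_balanced(keys[mid + 1:])
    return node

def insert(node, key, N):
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key, N)
    else:
        node.right = insert(node.right, key, N)
    if abs(height(node.left) - height(node.right)) > N:  # relaxed balance test
        node = build_balanced(inorder(node, []))         # rebuild descendants
    return node

root = None
for k in range(100):           # worst case: sorted insertions
    root = insert(root, k, 3)  # tolerate height difference up to N = 3
```

Because a rebuild is triggered only when the difference exceeds N, larger N means fewer adjustments, which mirrors the adjusting-ratio trend reported in the abstract.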
In traditional Proxy Re-Encryption (PRE), the proxy is too powerful: once it obtains the re-encryption key, it can re-encrypt all of the delegator's ciphertexts for the delegatee. Moreover, when there is more than one delegatee, the delegator needs to generate a different re-encryption key for each delegatee, which wastes a lot of resources in the calculation process. To solve these problems, an identity-based conditional proxy broadcast re-encryption scheme was introduced. The delegator generated a re-encryption key bound to a specified condition during encryption, so that the proxy's re-encryption authority was restricted to ciphertexts satisfying that condition only. Moreover, the delegator's ciphertexts could be re-encrypted and broadcast to a set of delegatees, which secured important communication and saved considerable computation and communication cost. Finally, theoretical analysis verified the security of the scheme.
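The access-control logic of the scheme (though none of its cryptography) can be modelled with a toy sketch. Everything below is hypothetical and deliberately non-cryptographic; it only shows how a condition tag limits the proxy's authority and how one re-encryption key serves a whole broadcast set of delegatees.

```python
# Toy model (no real cryptography!) of conditional proxy broadcast
# re-encryption: the proxy may transform a ciphertext for a whole set of
# delegatees, but only when the ciphertext's condition tag matches the
# condition the re-encryption key was generated for.

from dataclasses import dataclass

@dataclass
class Ciphertext:
    recipient: str   # identity the ciphertext is encrypted to
    condition: str   # condition tag chosen by the delegator at encryption
    payload: str     # stands in for the encrypted message

@dataclass
class ReKey:
    delegator: str
    delegatees: frozenset  # broadcast set: one key serves all of them
    condition: str         # the proxy may act on this condition only

def re_encrypt(rk, ct):
    """Proxy-side transform: one ciphertext per delegatee, or None."""
    if ct.recipient != rk.delegator or ct.condition != rk.condition:
        return None  # outside the proxy's delegated authority
    return [Ciphertext(d, ct.condition, ct.payload) for d in rk.delegatees]

rk = ReKey("alice", frozenset({"bob", "carol"}), condition="work")
ok = re_encrypt(rk, Ciphertext("alice", "work", "m1"))         # allowed
blocked = re_encrypt(rk, Ciphertext("alice", "private", "m2")) # refused
```

In the real scheme these checks are enforced algebraically by the key material rather than by an `if` statement; the toy only illustrates which transformations the proxy is authorized to perform.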
Through analysis of the reliability problems in existing workflow scheduling algorithms, a reliability-based workflow scheduling strategy was proposed, addressing the problem that some algorithms improve the reliability of the entire workflow by sacrificing efficiency or money. Combining the reliability of tasks in the workflow with a task-duplication approach, and taking full consideration of the priorities among tasks, this strategy lowered the failure rate of the transmission procedure while shortening transmission time, so it not only enhanced overall reliability but also reduced makespan. Experiments with different numbers of tasks and different Communication to Computation Ratios (CCR) showed that the cloud workflow reliability of this strategy was better than that of the Heterogeneous Earliest-Finish-Time (HEFT) algorithm and its improved version SHEFTEX, and that the proposed algorithm also outperformed HEFT in completion time.
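The core arithmetic behind duplication-based reliability can be sketched briefly. The numbers and the independence model below are hypothetical, not the paper's: a duplicated task succeeds if at least one replica succeeds, so its reliability rises to 1 - (1-r1)(1-r2), and the workflow reliability is the product over all tasks.

```python
# Illustrative sketch (hypothetical model and numbers): duplicating a
# task on a second processor raises its success probability, and the
# workflow reliability is the product of the per-task reliabilities,
# assuming independent failures.

from math import prod

def task_reliability(replica_reliabilities):
    """A task succeeds if at least one of its replicas succeeds."""
    return 1.0 - prod(1.0 - r for r in replica_reliabilities)

def workflow_reliability(tasks):
    """tasks: list of replica-reliability lists, one entry per task."""
    return prod(task_reliability(reps) for reps in tasks)

without_dup = workflow_reliability([[0.95], [0.90], [0.99]])
with_dup    = workflow_reliability([[0.95], [0.90, 0.90], [0.99]])  # duplicate task 2
```

Duplicating only the least reliable task lifts its reliability from 0.90 to 0.99, which is why targeted duplication can raise overall reliability without replicating every task.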